Large pretrained language models can easily produce toxic or biased content, which hinders their practical use. To detect such toxic generations, existing methods rely on templates, real-world data extraction, crowdsourced workers, or automatic generation to construct adversarial contexts that are likely to induce toxic generations. However, what type of context is more likely to induce unsafe responses is still under-explored. In this paper, we identify that context toxicity and context category (e.g., \textit{profanity}, \textit{insult}, \textit{drugs}, etc.) are two important factors that cause safety issues in response generation. Hence, we propose a method called \emph{reverse generation} to construct adversarial contexts conditioned on a given response, with the flexibility to control the category, toxicity level, and inductivity of the generated contexts. Via reverse generation, we augment the existing BAD dataset and construct a new dataset, BAD+, which contains more than 120K diverse and highly inductive contexts across 12 categories. We test three popular pretrained dialogue models (Blender, DialoGPT, and Plato2) and find that BAD+ can largely expose their safety problems. Furthermore, we show that BAD+ can greatly enhance the safety of generation, and we reveal the key factors behind the safety improvement. Our code and dataset are available at \url{https://github.com/thu-coai/Reverse_Generation}.
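The abstract describes reverse generation as decoding a context from a given response under control codes. As a rough, hedged sketch of that interface only (not the authors' implementation; the base model and the control-token format below are assumptions), one could drive a generic seq2seq model like this:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def reverse_generate(response, category, toxicity, inductive,
                     model_name="t5-small", num_contexts=3):
    """Decode candidate adversarial contexts conditioned on a target response.

    The control-token prefix and the base model are illustrative placeholders;
    the real category vocabulary would come from the 12 BAD+ categories.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    src = (f"[category={category}] [toxicity={toxicity}] "
           f"[inductive={inductive}] response: {response}")
    inputs = tokenizer(src, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, do_sample=True, top_p=0.9,
                             num_return_sequences=num_contexts,
                             max_new_tokens=64)
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

print(reverse_generate("I would never talk to someone like you.",
                       category="insult", toxicity="high", inductive="high"))
```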
Recently, Transformer-based image restoration networks have achieved promising improvements over convolutional neural networks due to parameter-independent global interactions. To lower computational cost, existing works generally limit self-attention computation to non-overlapping windows. However, each group of tokens is always drawn from a dense area of the image. This is considered a dense attention strategy, since token interactions are restricted to dense regions, which clearly results in limited receptive fields. To address this issue, we propose the Attention Retractable Transformer (ART) for image restoration, which incorporates both dense and sparse attention modules in the network. The sparse attention module allows tokens from sparse areas to interact and thus provides a wider receptive field. Furthermore, alternating between dense and sparse attention modules greatly enhances the representation ability of the Transformer while providing retractable attention on the input image. We conduct extensive experiments on image super-resolution, denoising, and JPEG compression artifact reduction tasks. Experimental results validate that our proposed ART outperforms state-of-the-art methods on various benchmark datasets both quantitatively and visually. We also provide code and models at the website https://github.com/gladzhang/ART.
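A minimal sketch of the two token groupings the abstract contrasts, assuming square feature maps and illustrative window/interval sizes (the attention computation inside each group is omitted):

```python
import torch

def dense_groups(x, window):
    # Dense strategy: non-overlapping local windows.
    # x: (B, H, W, C) -> (B * num_windows, window*window, C)
    B, H, W, C = x.shape
    x = x.view(B, H // window, window, W // window, window, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window * window, C)

def sparse_groups(x, interval):
    # Sparse strategy: tokens taken at a fixed stride, so each group gathers
    # positions spread over the whole feature map (image-wide receptive field).
    B, H, W, C = x.shape
    x = x.view(B, H // interval, interval, W // interval, interval, C)
    return x.permute(0, 2, 4, 1, 3, 5).reshape(-1, (H // interval) * (W // interval), C)

feat = torch.randn(1, 32, 32, 64)
print(dense_groups(feat, 8).shape)   # (16, 64, 64): 16 local windows of 64 tokens
print(sparse_groups(feat, 8).shape)  # (64, 16, 64): 64 strided groups of 16 tokens
```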
Recent CLIP-guided 3D optimization methods, e.g., DreamFields and PureCLIPNeRF, achieve great success in zero-shot text-guided 3D synthesis. However, because they are trained from scratch with random initialization and without any prior knowledge, these methods usually fail to generate accurate and faithful 3D structures that conform to the input text. In this paper, we make the first attempt to introduce an explicit 3D shape prior into CLIP-guided 3D optimization methods. Specifically, we first generate a high-quality 3D shape from the input text in a text-to-shape stage and use it as the 3D shape prior. We then use it to initialize a neural radiance field, which we optimize with the full prompt. For text-to-shape generation, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between the images synthesized by the text-to-image model and the shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, namely Dream3D, is capable of generating imaginative 3D content with better visual quality and shape accuracy than state-of-the-art methods.
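As a heavily simplified, hedged sketch of only the CLIP-guided optimization stage, with the scene collapsed to a directly learnable image instead of a shape-prior-initialized radiance field rendered from random viewpoints (a toy stand-in for the loss structure, not the Dream3D pipeline):

```python
import torch
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
for p in clip.parameters():
    p.requires_grad_(False)

prompt = "a small wooden boat"
with torch.no_grad():
    text_inputs = proc(text=[prompt], return_tensors="pt", padding=True)
    text_emb = clip.get_text_features(**text_inputs)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

# Toy "scene": a directly learnable RGB image. In the paper this would be a
# NeRF initialized from the text-to-shape prior and rendered each step.
image = torch.nn.Parameter(torch.rand(1, 3, 224, 224))
opt = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    img_emb = clip.get_image_features(pixel_values=image.clamp(0, 1))
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    loss = 1.0 - (img_emb * text_emb).sum()   # maximize CLIP text-image similarity
    opt.zero_grad()
    loss.backward()
    opt.step()
```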
Due to the shortage of human resources for mental health support, there is an increasing demand for conversational agents that can provide such support. Recent work has demonstrated the effectiveness of dialogue models in providing emotional support. Since previous studies have shown that the seeker's persona is an important factor in effective support, we investigate whether there are benefits to modeling such information in dialogue models for support. In this paper, our empirical analysis verifies that persona has an important impact on emotional support. Therefore, we propose a framework for dynamically inferring and modeling the seeker's persona. We first train a model for inferring the seeker's persona from the conversation history. Building on this, we propose PAL, a model that leverages persona information and, in conjunction with our strategy-based controllable generation method, provides personalized emotional support. Automatic and manual evaluations demonstrate that our proposed model, PAL, achieves state-of-the-art results, outperforming the baselines on the studied benchmark. Our code and data are publicly available at https://github.com/chengjl19/PAL.
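A hedged sketch of the two-stage flow the abstract outlines, with a trivial rule-based stand-in for the learned persona-inference model and an off-the-shelf generator; the model name and the input format are assumptions, not PAL's configuration:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def infer_persona(history):
    # Placeholder for the trained persona-inference model: keep the seeker's
    # self-descriptive first-person statements from the conversation history.
    return [u for speaker, u in history if speaker == "seeker" and u.lower().startswith("i ")]

def respond(history, model_name="facebook/blenderbot-400M-distill"):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    persona = infer_persona(history)
    # Prepend inferred persona sentences to the dialogue context (illustrative format).
    context = " ".join(f"persona: {p}" for p in persona) + " " + \
              " ".join(u for _, u in history)
    inputs = tokenizer(context, return_tensors="pt", truncation=True)
    out = model.generate(**inputs, max_new_tokens=60)
    return tokenizer.decode(out[0], skip_special_tokens=True)

history = [("seeker", "I lost my job last month and I feel useless."),
           ("supporter", "I'm sorry to hear that. What happened?")]
print(respond(history))
```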
Motion transfer aims to transfer the motion of a driving video to a source image. When there are considerable differences between the object in the driving video and the object in the source image, conventional single-domain motion transfer approaches often produce notable artifacts; for example, the synthesized image may fail to preserve the human shape of the source image (see Fig. 1(a)). To address this issue, in this work we propose a Motion and Appearance Adaptation (MAA) approach for cross-domain motion transfer, in which we regularize the object in the synthesized image to capture the motion of the object in the driving frame while still preserving the shape and appearance of the object in the source image. On the one hand, considering that the object shapes of the synthesized image and the driving frame may differ, we design a shape-invariant motion adaptation module that enforces the consistency of the angles of object parts in the two images to capture the motion information. On the other hand, we introduce a structure-guided appearance consistency module designed to regularize the similarity between corresponding patches of the synthesized image and the source image, without affecting the motion learned in the synthesized image. Our proposed MAA model can be trained end-to-end with a cyclic reconstruction loss and ultimately produces satisfactory motion transfer results (see Fig. 1(b)). We conduct extensive experiments on the human dancing dataset Mixamo-Video to Fashion-Video and the human face dataset Vox-Celeb to CUFS; on both, our MAA model outperforms existing methods both quantitatively and qualitatively.
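A hedged sketch of an angle-consistency term in the spirit of the shape-invariant motion adaptation module described above: for pairs of part keypoints, the orientation of the connecting vector in the synthesized image is matched to the one in the driving frame, so motion is compared through angles rather than absolute positions. The keypoint layout and part pairing are assumptions.

```python
import torch

def angle_consistency_loss(kp_synth, kp_drive, pairs):
    # kp_*: (B, K, 2) part keypoints; pairs: list of (i, j) part-index tuples
    loss = 0.0
    for i, j in pairs:
        v_s = kp_synth[:, j] - kp_synth[:, i]
        v_d = kp_drive[:, j] - kp_drive[:, i]
        ang_s = torch.atan2(v_s[:, 1], v_s[:, 0])
        ang_d = torch.atan2(v_d[:, 1], v_d[:, 0])
        # wrap the angular difference to (-pi, pi] before penalizing it
        diff = torch.remainder(ang_s - ang_d + torch.pi, 2 * torch.pi) - torch.pi
        loss = loss + diff.pow(2).mean()
    return loss / len(pairs)

kp_s = torch.rand(2, 10, 2)
kp_d = torch.rand(2, 10, 2)
print(angle_consistency_loss(kp_s, kp_d, [(0, 1), (1, 2), (2, 3)]))
```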
Image animation aims to animate a source image using the motion learned from a driving video. Current state-of-the-art methods typically use convolutional neural networks (CNNs) to predict motion information, such as motion keypoints and corresponding local transformations. However, these CNN-based methods do not explicitly model the interactions among the motions. As a result, important underlying motion relations may be overlooked, which can lead to noticeable artifacts in the generated animation videos. To this end, we propose a new method, the Motion Transformer, which is the first attempt to build a motion estimator based on a vision transformer. More specifically, we introduce two types of tokens in the proposed method: i) image tokens formed from patch features and corresponding positional encodings; and ii) motion tokens encoded with motion information. Both types of tokens are fed into a vision transformer to promote the underlying interactions between them through multi-head self-attention blocks. Through this process, the motion information can be better learned to boost model performance. The final embedded motion tokens are then used to predict the corresponding motion keypoints and local transformations. Extensive experiments on benchmark datasets show that our proposed method achieves promising results compared with state-of-the-art baselines. Our source code will be publicly available.
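A minimal sketch of the token layout described above, with illustrative dimensions: learned motion tokens are concatenated with image (patch) tokens, processed jointly by self-attention blocks, and the output motion tokens are read out for keypoint prediction (a local-transformation head would be analogous). Sizes and head layout are assumptions.

```python
import torch
import torch.nn as nn

class MotionTokenEncoder(nn.Module):
    def __init__(self, num_patches=64, num_motion_tokens=10, dim=128):
        super().__init__()
        self.patch_embed = nn.Linear(3 * 16 * 16, dim)            # flattened 16x16 patches
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.motion_tokens = nn.Parameter(torch.zeros(1, num_motion_tokens, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=4)
        self.kp_head = nn.Linear(dim, 2)                           # one 2-D keypoint per motion token

    def forward(self, patches):                                    # patches: (B, N, 3*16*16)
        img_tok = self.patch_embed(patches) + self.pos_embed
        mot_tok = self.motion_tokens.expand(img_tok.size(0), -1, -1)
        tokens = torch.cat([img_tok, mot_tok], dim=1)              # joint self-attention
        out = self.blocks(tokens)
        return self.kp_head(out[:, img_tok.size(1):])              # (B, num_motion_tokens, 2)

model = MotionTokenEncoder()
print(model(torch.randn(2, 64, 3 * 16 * 16)).shape)                # torch.Size([2, 10, 2])
```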
In the past few years, convolutional neural network (CNN)-based crowd counting methods have achieved promising results. However, the scale variation problem remains a huge challenge for accurate count estimation. In this paper, we propose a multi-scale feature aggregation network (MSFANet) that can alleviate this problem to some extent. Specifically, our approach consists of two feature aggregation modules: short aggregation (ShortAgg) and skip aggregation (SkipAgg). The ShortAgg module aggregates the features of adjacent convolutional blocks; its purpose is to produce features with different receptive fields that are fused gradually from the bottom of the network. The SkipAgg module directly propagates features with small receptive fields to features with much larger receptive fields; its purpose is to promote the fusion of features with small and large receptive fields. In particular, the SkipAgg module introduces local self-attention features from Swin Transformer blocks to incorporate rich spatial information. Furthermore, we propose a local-and-global counting loss that accounts for non-uniform crowd distributions. Extensive experiments on four challenging datasets (the ShanghaiTech, UCF_CC_50, UCF-QNRF, and WorldExpo'10 datasets) show that the proposed easy-to-implement MSFANet achieves promising results compared with previous state-of-the-art approaches.
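A toy sketch of the two aggregation patterns, with illustrative channel sizes and 1x1 fusion convolutions as assumptions (the Swin Transformer branch and the counting loss are omitted):

```python
import torch
import torch.nn as nn

class ToyBackbone(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(3 if i == 0 else ch, ch, 3, padding=1), nn.ReLU())
             for i in range(4)])
        self.short_fuse = nn.ModuleList([nn.Conv2d(2 * ch, ch, 1) for _ in range(3)])
        self.skip_fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, x):
        feats = []
        for i, blk in enumerate(self.blocks):
            x = blk(x)
            if i > 0:  # ShortAgg: fuse with the adjacent (previous) block's feature
                x = self.short_fuse[i - 1](torch.cat([x, feats[-1]], dim=1))
            feats.append(x)
        # SkipAgg: carry the earliest, small-receptive-field feature to the last block
        return self.skip_fuse(torch.cat([feats[-1], feats[0]], dim=1))

print(ToyBackbone()(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```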
The success of transformers in natural language processing has recently attracted attention in the computer vision field. Owing to their ability to learn long-range dependencies, transformers have been used as a replacement for the widely used convolutional operators. This replacement has proven successful in numerous tasks, where several state-of-the-art methods rely on transformers for better learning. In computer vision, the 3D field has also witnessed a growing use of transformers to augment 3D convolutional neural networks and multi-layer perceptron networks. Although many surveys focus on transformers in vision, 3D vision requires special attention due to differences in data representation and processing compared with 2D vision. In this work, we present a systematic and thorough review of more than 100 transformer-based methods for different 3D vision tasks, including classification, segmentation, detection, completion, pose estimation, and others. We discuss transformer designs in 3D vision that enable processing data with various 3D representations. For each application, we highlight the key properties and contributions of the transformer-based methods. To assess the competitiveness of these methods, we compare their performance with that of common non-transformer methods on 12 3D benchmarks. We conclude the survey by discussing different open directions and challenges for transformers in 3D vision. In addition to the reviewed papers, we aim to frequently update the latest relevant papers along with their corresponding implementations at: https://github.com/lahoud/3d-vision-transformers.
Multiple object tracking (MOT) is a task comprising detection and association. A large number of trackers have achieved competitive performance. Unfortunately, due to the lack of information exchange between these sub-tasks, they are often biased toward one of the two and perform poorly in complex scenarios, for example producing false negatives and target trajectory errors when targets pass each other. In this paper, we propose Transfiner, a transformer-based post-refinement method for MOT. It is a generic attachable framework that takes the image and the tracking results (locations and class predictions) from the original tracker as inputs, which are then used to drive the refinement. Furthermore, Transfiner relies on query pairs, which produce pairs of detection and motion predictions through a fusion decoder and achieve comprehensive tracking refinement. We also provide targeted refinement by labeling query pairs according to different refinement levels. Experiments demonstrate the effectiveness of our design: on the MOT17 benchmark, we lift CenterTrack from 67.8% MOTA and 64.7% IDF1 to 71.5% MOTA and 66.8% IDF1.
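A loose, hedged sketch of the query-pair idea: one detection query and one motion query per tracked target attend to image features in a transformer decoder, and the paired outputs are read out by refinement heads. Shapes, query construction, and head design are assumptions, not Transfiner's actual architecture.

```python
import torch
import torch.nn as nn

class QueryPairRefiner(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        layer = nn.TransformerDecoderLayer(dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.box_head = nn.Linear(2 * dim, 4)     # refined box per query pair
        self.cls_head = nn.Linear(2 * dim, 1)     # refined confidence per query pair

    def forward(self, det_q, mot_q, memory):
        # det_q, mot_q: (B, T, dim), one query per tracked target; memory: (B, HW, dim)
        queries = torch.cat([det_q, mot_q], dim=1)
        out = self.decoder(queries, memory)
        T = det_q.size(1)
        pair = torch.cat([out[:, :T], out[:, T:]], dim=-1)   # re-pair detection/motion outputs
        return self.box_head(pair), self.cls_head(pair)

m = QueryPairRefiner()
boxes, scores = m(torch.randn(2, 5, 128), torch.randn(2, 5, 128), torch.randn(2, 49, 128))
print(boxes.shape, scores.shape)  # torch.Size([2, 5, 4]) torch.Size([2, 5, 1])
```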
We present united implicit functions (UNIF), a part-based method for human reconstruction and animation that takes raw scans and skeletons as input. Previous part-based human reconstruction methods rely on ground-truth part labels from SMPL and are therefore limited to minimally clothed humans. In contrast, our method learns to separate parts from body motion rather than from part supervision, and can thus be extended to clothed humans and other articulated objects. Our partition-from-motion is achieved through a bone-centered initialization, a bone limit loss, and a normal loss, which ensure stable part division even when the training poses are limited. We also propose a minimal perimeter loss for the SDF to suppress extraneous surfaces and part overlap. Another core of our method is an adjacent part seaming algorithm that produces non-rigid deformations to maintain the connections between parts, which significantly alleviates part-based artifacts. Building on this algorithm, we further propose "competing parts", which defines the weights by a point's position relative to the bones rather than its absolute position, thereby avoiding the generalization problem of neural implicit functions with linear blend skinning. We demonstrate the effectiveness of our method through clothed human body reconstruction and animation on the CAPE and ClothSeq datasets.
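A much-simplified sketch of composing per-part SDFs in bone-relative coordinates, echoing the relative-to-bone idea at a toy level: each part network sees query points expressed relative to its bone, and the body SDF is the minimum over parts. The bone parameterization (centers only, no orientation), network sizes, and the min-composition are assumptions.

```python
import torch
import torch.nn as nn

class PartSDF(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x_local):
        return self.net(x_local)

class UnifiedSDF(nn.Module):
    def __init__(self, bone_centers):                     # (P, 3), one center per part/bone
        super().__init__()
        self.register_buffer("centers", bone_centers)
        self.parts = nn.ModuleList(PartSDF() for _ in range(bone_centers.size(0)))

    def forward(self, pts):                               # pts: (N, 3) query points
        # Evaluate each part SDF on bone-relative coordinates, then take the minimum.
        sdfs = [part(pts - c) for part, c in zip(self.parts, self.centers)]
        return torch.min(torch.cat(sdfs, dim=-1), dim=-1).values  # (N,)

model = UnifiedSDF(torch.randn(4, 3))
print(model(torch.randn(100, 3)).shape)                   # torch.Size([100])
```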